985 results for Robustness and Sensitivity Analysis


Relevance:

100.00%

Publisher:

Abstract:

Whilst estimation of the marginal (total) causal effect of a point exposure on an outcome is arguably the most common objective of experimental and observational studies in the health and social sciences, in recent years investigators have also become increasingly interested in mediation analysis. Specifically, upon establishing a non-null total effect of the exposure, investigators routinely wish to make inferences about the direct (indirect) pathway of the effect of the exposure not through (through) a mediator variable that occurs after the exposure and before the outcome. Although powerful semiparametric methodologies have been developed for observational studies that produce double robust and highly efficient estimates of the marginal total causal effect, similar methods for mediation analysis are currently lacking. Thus, this paper develops a general semiparametric framework for obtaining inferences about so-called marginal natural direct and indirect causal effects, while appropriately accounting for a large number of pre-exposure confounding factors for the exposure and the mediator variables. Our analytic framework is particularly appealing because it gives new insights on issues of efficiency and robustness in the context of mediation analysis. In particular, we propose new multiply robust, locally efficient estimators of the marginal natural indirect and direct causal effects, and develop a novel double robust sensitivity analysis framework for the assumption of ignorability of the mediator variable.
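
As a point of reference, the simplest special case of the mediation functional is the classical product-of-coefficients decomposition under linear models with no exposure-mediator interaction; the sketch below (ordinary least squares via NumPy, illustrative variable names) is that parametric special case, not the multiply robust semiparametric estimators proposed in the paper.

    import numpy as np

    def mediation_effects(A, M, Y, X):
        """Parametric mediation-formula estimates of natural direct/indirect
        effects under linear models with no exposure-mediator interaction.
        A: (n,) binary exposure, M: (n,) mediator, Y: (n,) outcome,
        X: (n, p) pre-exposure confounders."""
        ones = np.ones(len(A))
        # Mediator model: M = b0 + bA*A + bX'X
        Zm = np.column_stack([ones, A, X])
        bm = np.linalg.lstsq(Zm, M, rcond=None)[0]
        # Outcome model: Y = t0 + tA*A + tM*M + tX'X
        Zy = np.column_stack([ones, A, M, X])
        ty = np.linalg.lstsq(Zy, Y, rcond=None)[0]
        nde = ty[1]            # natural direct effect
        nie = ty[2] * bm[1]    # natural indirect effect (product of coefficients)
        return nde, nie, nde + nie   # total effect decomposition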

Relevance:

100.00%

Publisher:

Abstract:

The paper focuses on the development of an aircraft design optimization methodology that models uncertainty and sensitivity analysis in the tradeoff between manufacturing cost, structural requirements, and aircraft direct operating cost. Specifically, rather than only looking at manufacturing cost, direct operating cost is also considered in terms of the impact of weight on fuel burn, in addition to the acquisition cost to be borne by the operator. Ultimately, there is a tradeoff between driving design according to minimal weight and driving it according to reduced manufacturing cost. The analysis of cost is facilitated with a genetic-causal cost-modeling methodology, and the structural analysis is driven by numerical expressions of appropriate failure modes that use ESDU International reference data. However, a key contribution of the paper is to investigate the modeling of uncertainty and to perform a sensitivity analysis to investigate the robustness of the optimization methodology. Stochastic distributions are used to characterize manufacturing cost distributions, and Monte Carlo analysis is performed in modeling the impact of uncertainty on the cost modeling. The results are then used in a sensitivity analysis that incorporates the optimization methodology. In addition to investigating manufacturing cost variance, the sensitivity of the optimization to fuel burn cost and structural loading is also investigated. It is found that the consideration of manufacturing cost does make an impact and results in a different optimal design configuration from that delivered by the minimal-weight method. However, it was shown that at lower applied loads there is a threshold fuel burn cost at which the optimization process needs to reduce weight, and this threshold decreases with increasing load. The new optimal solution results in lower direct operating cost, with a predicted saving of $640/m² of fuselage skin over the life, relating to a rough order-of-magnitude direct operating cost saving of $500,000 for the fuselage alone of a small regional jet. Moreover, it was found through the uncertainty analysis that the principle was not sensitive to cost variance, although the margins do change.
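
A minimal sketch of the kind of Monte Carlo uncertainty propagation described, assuming illustrative cost distributions and a simple direct-operating-cost proxy (acquisition cost plus weight-driven fuel burn); the distributions, values and variable names are placeholders, not the paper's genetic-causal cost model.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 10_000
    # Illustrative stochastic inputs (placeholders, not the paper's distributions)
    mfg_cost = rng.lognormal(mean=np.log(2.0e5), sigma=0.15, size=n)      # $, acquisition share
    panel_mass = rng.normal(450.0, 20.0, size=n)                          # kg of structure
    fuel_cost_per_kg = rng.triangular(300.0, 400.0, 550.0, size=n)        # $ per kg over the life

    # Direct operating cost proxy: acquisition (manufacturing-driven) + weight-driven fuel burn
    doc = mfg_cost + panel_mass * fuel_cost_per_kg

    print(f"DOC mean   = ${doc.mean():,.0f}")
    print(f"DOC 5-95%  = ${np.percentile(doc, 5):,.0f} .. ${np.percentile(doc, 95):,.0f}")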

Relevance:

100.00%

Publisher:

Abstract:

In this article the multibody simulation software package MADYMO for analysing and optimizing occupant safety design was used to model crash tests for Normal Containment barriers in accordance with EN 1317. The verification process was carried out by simulating a TB31 and a TB32 crash test performed on vertical portable concrete barriers and by comparing the numerical results to those obtained experimentally. The same modelling approach was applied to both tests to evaluate the predictive capacity of the modelling at two different impact speeds. A sensitivity analysis of the vehicle stiffness was also carried out. The capacity to predict all of the principal EN 1317 criteria was assessed for the first time: the acceleration severity index, the theoretical head impact velocity, the barrier working width and the vehicle exit box. Results showed a maximum error of 6% for the acceleration severity index and 21% for the theoretical head impact velocity for the numerical simulation in comparison to the recorded data. The exit box position was predicted with a maximum error of 4°. For the working width, a large percentage difference was observed for test TB31 due to the small absolute value of the barrier deflection, but the results were well within the limit value from the standard for both tests. The sensitivity analysis showed the robustness of the modelling with respect to variations in contact stiffness of ±20% and ±40%. This is the first multibody model of portable concrete barriers that can reproduce not only the acceleration severity index but all the test criteria of EN 1317, and it is therefore a valuable tool for new product development and for injury biomechanics research.
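
For reference, the acceleration severity index is computed from vehicle accelerations averaged over a 50 ms moving window and normalised by the EN 1317 limit values (12 g, 9 g and 10 g for the longitudinal, lateral and vertical axes); a minimal sketch, assuming acceleration traces already expressed in g:

    import numpy as np

    def asi(ax, ay, az, dt, window=0.05, limits=(12.0, 9.0, 10.0)):
        """Acceleration Severity Index per EN 1317: the three acceleration
        components (in g) are averaged over a 50 ms moving window, normalised
        by the limit values for the x, y and z vehicle axes, and combined."""
        n = max(1, int(round(window / dt)))
        kernel = np.ones(n) / n
        axm = np.convolve(ax, kernel, mode="same")
        aym = np.convolve(ay, kernel, mode="same")
        azm = np.convolve(az, kernel, mode="same")
        asi_t = np.sqrt((axm / limits[0])**2 + (aym / limits[1])**2 + (azm / limits[2])**2)
        return asi_t.max()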

Relevance:

100.00%

Publisher:

Abstract:

In this paper the dynamical interactions of a double pendulum arm and an electromechanical shaker are investigated. The double pendulum is a three-degree-of-freedom system coupled to an RLC-circuit-based nonlinear shaker through a magnetic field, and the capacitor voltage is a nonlinear function of the instantaneous electric charge. Numerical simulations show the existence of chaotic behaviour for some regions in the parameter space, and this behaviour is characterized by power spectral density and Lyapunov exponents. The bifurcation diagram is constructed to explore the qualitative behaviour of the system. This kind of electromechanical system is frequently found in robotic systems, and in order to suppress the chaotic motion, the State-Dependent Riccati Equation (SDRE) control and the Nonlinear Saturation Control (NSC) techniques are analyzed. The robustness of these two controllers is tested by a sensitivity analysis to parametric uncertainties.
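
A minimal sketch of a single SDRE control update, assuming the equations of motion have already been factored into a state-dependent coefficient form x_dot = A(x)x + B(x)u (that factorisation is problem-specific and not shown here): the algebraic Riccati equation is re-solved at the current state and the resulting gain is applied as state feedback.

    import numpy as np
    from scipy.linalg import solve_continuous_are

    def sdre_control(x, A_of_x, B_of_x, Q, R):
        """One State-Dependent Riccati Equation control step: freeze the
        state-dependent coefficient matrices at the current state, solve the
        continuous-time algebraic Riccati equation, and return the suboptimal
        state-feedback command u(x) = -R^{-1} B(x)^T P(x) x."""
        A = A_of_x(x)
        B = B_of_x(x)
        P = solve_continuous_are(A, B, Q, R)
        K = np.linalg.solve(R, B.T @ P)      # K = R^{-1} B^T P
        return -K @ x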

Relevance:

100.00%

Publisher:

Abstract:

The Interstellar Boundary Explorer (IBEX) samples the interstellar neutral (ISN) gas flow of several species every year from December through late March, when the Earth moves into the incoming flow. The first quantitative analyses of these data resulted in a narrow tube in the four-dimensional interstellar parameter space that couples speed, flow latitude, flow longitude, and temperature, with center values at an approximately 3° larger longitude and 3 km s⁻¹ lower speed, but with temperatures similar to those obtained from observations by the Ulysses spacecraft. IBEX has now recorded six years of ISN flow observations, providing a large database over increasing solar activity and using varying viewing strategies. In this paper, we evaluate systematic effects that are important for the ISN flow vector and temperature determination. We find that all models in use return ISN parameters well within the observational uncertainties and that the derived ISN flow direction is resilient against uncertainties in the ionization rate. We establish observationally an effective IBEX-Lo pointing uncertainty of ±0.18° in spin angle and confirm an uncertainty of ±0.1° in longitude. We also show that the IBEX viewing strategy with different spin-axis orientations minimizes the impact of several systematic uncertainties and thus improves the robustness of the measurement. The Helium Warm Breeze has likely contributed substantially to the somewhat different center values of the ISN flow vector. By separating the flow vector and temperature determination, we can mitigate these effects on the analysis, which returns an ISN flow vector very close to the Ulysses results, but with a substantially higher temperature. Because of the coupling with the ISN flow speed along the ISN parameter tube, we provide the temperature T_ISN∞ = 8710 +440/−680 K for V_ISN∞ = 26 km s⁻¹ for comparison, where most of the uncertainty is systematic and likely due to the presence of the Warm Breeze.

Relevance:

100.00%

Publisher:

Abstract:

This paper presents a model for estimating average travel time and its variability on signalized urban networks using cumulative plots. The plots are generated based on the availability of data: (a) case-D, detector data only; (b) case-DS, detector data and signal timings; and (c) case-DSS, detector data, signal timings and saturation flow rate. The performance of the model for different degrees of saturation and different detector detection intervals is consistent for case-DSS and case-DS, whereas for case-D the performance is inconsistent. The sensitivity analysis of the model for case-D indicates that it is sensitive to the detection interval and to the signal timings within the interval. When the detection interval is an integral multiple of the signal cycle, both accuracy and reliability are low, whereas for a detection interval of around 1.5 times the signal cycle both are high.
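
The core of the cumulative-plot approach is that the area between the upstream (arrival) and downstream (departure) cumulative count curves, divided by the number of vehicles served, gives the average travel time; a minimal sketch under FIFO and matched-count assumptions (array names are illustrative):

    import numpy as np

    def avg_travel_time(t, arrivals_cum, departures_cum):
        """Average travel time from cumulative plots: the area between the
        arrival and departure cumulative count curves (vehicle-seconds),
        divided by the number of vehicles served."""
        q = arrivals_cum - departures_cum                      # vehicles in the section
        area = np.sum(0.5 * (q[1:] + q[:-1]) * np.diff(t))     # trapezoidal area, veh-s
        n_veh = departures_cum[-1] - departures_cum[0]
        return area / n_veh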

Relevance:

100.00%

Publisher:

Abstract:

This research treated the response of underground transportation tunnels to surface blast loads using advanced computer simulation techniques. The influences of important parameters, such as tunnel material, geometrical configuration of segments and surrounding soil were investigated. The findings of this research offer significant new information on the blast performance of underground tunnels and will contribute towards future civil engineering applications.

Relevance:

100.00%

Publisher:

Abstract:

The Davis Growth Model (a dynamic steer growth model encompassing 4 fat deposition models) is currently being used by the phenotypic prediction program of the Cooperative Research Centre (CRC) for Beef Genetic Technologies to predict P8 fat (mm) in beef cattle, to assist beef producers in meeting market specifications. The concepts of cellular hyperplasia and hypertrophy are integral components of the Davis Growth Model. The net synthesis of total body fat (kg) is calculated from the net energy available after accounting for energy needs for maintenance and protein synthesis. Total body fat (kg) is then partitioned into 4 fat depots (intermuscular, intramuscular, subcutaneous, and visceral). This paper reports on the parameter estimation and sensitivity analysis of the DNA (deoxyribonucleic acid) logistic growth equations and the fat deposition first-order differential equations in the Davis Growth Model using acslXtreme (Xcellon, Huntsville, AL, USA). The DNA and fat deposition parameter coefficients were found to be important determinants of model function: the DNA parameter coefficients for days on feed >100 days, and the fat deposition parameter coefficients for all days on feed. The generalized NL2SOL optimization algorithm had the fastest processing time and the minimum number of objective function evaluations when estimating the 4 fat deposition parameter coefficients with 2 observed values (initial and final fat). The subcutaneous fat parameter coefficient did indicate a metabolic difference between frame sizes. The results look promising, and the prototype Davis Growth Model has the potential to assist the beef industry in meeting market specifications.
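
A minimal sketch of the structure being fitted: a logistic DNA growth equation, a first-order fat deposition equation, and a least-squares estimate of the deposition rate constant from initial and final fat observations. SciPy's least_squares is used here as a stand-in for the NL2SOL algorithm in acslXtreme, and all equations, parameter values and observations are illustrative rather than the Davis Growth Model's actual formulation.

    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import least_squares

    def rhs(t, y, r, K, k_fat, fat_max):
        """Illustrative state equations: logistic DNA growth and first-order
        fat deposition toward an asymptote (stand-ins for the Davis model)."""
        dna, fat = y
        d_dna = r * dna * (1.0 - dna / K)       # logistic DNA growth
        d_fat = k_fat * (fat_max - fat)         # first-order fat deposition
        return [d_dna, d_fat]

    def residuals(k_fat, t_obs, fat_obs, r, K, fat_max, y0):
        sol = solve_ivp(rhs, (t_obs[0], t_obs[-1]), y0,
                        args=(r, K, k_fat[0], fat_max), t_eval=t_obs)
        return sol.y[1] - fat_obs

    # Fit the deposition rate constant to initial and final fat observations
    t_obs = np.array([0.0, 150.0])          # days on feed (illustrative)
    fat_obs = np.array([40.0, 120.0])       # kg of fat (illustrative)
    fit = least_squares(residuals, x0=[0.01],
                        args=(t_obs, fat_obs, 0.05, 4.0, 160.0, [0.5, 40.0]))
    print("estimated k_fat:", fit.x[0])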

Relevance:

100.00%

Publisher:

Abstract:

A mathematical model is developed to simulate the transport and deposition of virus-sized colloids in a cylindrical pore throat, considering processes such as advection, diffusion, colloid-collector surface interactions and hydrodynamic wall effects. The pore space is divided into three regions, namely bulk, diffusion and potential regions, based on the dominant processes acting in each of them. In the bulk region, colloid transport is governed by advection and diffusion, whereas in the diffusion region colloid mobility due to diffusion is retarded by hydrodynamic wall effects. Colloid-collector interaction forces dominate the transport in the potential region, where colloid deposition occurs. The governing equations are non-dimensionalized and solved numerically. A sensitivity analysis indicates that virus-sized colloid transport and deposition are significantly affected by various pore-scale parameters, such as the surface potentials on colloid and collector, the ionic strength of the solution, the flow velocity, the pore size and the colloid size. The adsorbed concentration, and hence the favorability of the surface for adsorption, increases with: (i) decreasing magnitude and ratio of the surface potentials on colloid and collector, (ii) increasing ionic strength and (iii) increasing pore radius. The adsorbed concentration increases with increasing Pe, reaching a maximum value at Pe = 0.1 and decreasing thereafter. The colloid size also significantly affects particle deposition, with the adsorbed concentration increasing with increasing particle radius, reaching a maximum value at a particle radius of 100 nm and then decreasing with increasing radius. System hydrodynamics is found to have a greater effect on larger particles than on smaller ones. The secondary minimum contribution to particle deposition is found to increase as the favorability of the surface for adsorption decreases. The sensitivity of the model to a given parameter will be high if the conditions are favorable for adsorption. The results agree qualitatively with the column-scale experimental observations available in the literature. The current model forms the building block for upscaling colloid transport from the pore scale to the Darcy scale using Pore-Network Modeling. © 2014 Elsevier B.V. All rights reserved.
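
The colloid-collector interaction that defines the potential region is typically described by DLVO theory; the sketch below evaluates a sphere-plate interaction energy as the sum of the Hogg-Healy-Fuerstenau double-layer term and an unretarded van der Waals term, with illustrative values for the Hamaker constant, surface potentials and ionic strength (the paper's exact force expressions may differ).

    import numpy as np

    kB   = 1.380649e-23      # J/K
    e    = 1.602176634e-19   # C
    NA   = 6.02214076e23     # 1/mol
    eps0 = 8.8541878128e-12  # F/m

    def dlvo_sphere_plate(h, a, psi_p, psi_c, ionic_strength, A_H=1e-20,
                          eps_r=78.5, T=298.15):
        """Sphere-plate DLVO interaction energy (J) versus separation h (m):
        Hogg-Healy-Fuerstenau double-layer term plus unretarded van der Waals.
        Illustrative constants; a is the colloid radius, psi_* are surface
        potentials (V), ionic_strength is in mol/L for a 1:1 electrolyte."""
        kappa = np.sqrt(2 * NA * e**2 * (ionic_strength * 1e3) /
                        (eps_r * eps0 * kB * T))        # inverse Debye length
        edl = np.pi * eps_r * eps0 * a * (
            2 * psi_p * psi_c * np.log((1 + np.exp(-kappa * h)) /
                                       (1 - np.exp(-kappa * h)))
            + (psi_p**2 + psi_c**2) * np.log(1 - np.exp(-2 * kappa * h)))
        vdw = -(A_H / 6) * (a / h + a / (h + 2 * a) + np.log(h / (h + 2 * a)))
        return edl + vdw

    # Example: 100 nm diameter colloid, -25 mV surfaces, 10 mM solution
    h = np.logspace(-10, -7, 200)
    phi = dlvo_sphere_plate(h, a=50e-9, psi_p=-0.025, psi_c=-0.025,
                            ionic_strength=0.01)
    print("energy barrier (kT):", (phi / (kB * 298.15)).max())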

Relevance:

100.00%

Publisher:

Abstract:

The Laser Interferometer Gravitational-Wave Observatory (LIGO) consists of two complex large-scale laser interferometers designed for direct detection of gravitational waves from distant astrophysical sources in the frequency range 10 Hz to 5 kHz. Direct detection of space-time ripples will support Einstein's general theory of relativity and provide invaluable information and new insight into the physics of the Universe.

The initial phase of LIGO started in 2002, and since then data have been collected during six science runs. The instrument sensitivity improved from run to run thanks to the effort of the commissioning team. Initial LIGO reached its design sensitivity during the last science run, which ended in October 2010.

In parallel with commissioning and data analysis of the initial detector, the LIGO group worked on research and development of the next-generation detectors. The major instrument upgrade from initial to Advanced LIGO started in 2010 and lasted until 2014.

This thesis describes the results of commissioning work done at the LIGO Livingston site from 2013 until 2015, in parallel with and after the installation of the instrument. It also discusses new techniques and tools developed at the 40m prototype, including adaptive filtering, estimation of quantization noise in digital filters, and the design of isolation kits for ground seismometers.

The first part of this thesis is devoted to the description of methods for bringing the interferometer to the linear regime, in which the collection of data becomes possible. The states of the longitudinal and angular controls of the interferometer degrees of freedom during the lock acquisition process and in the low-noise configuration are discussed in detail.

Once the interferometer is locked and transitioned to the low-noise regime, the instrument produces astrophysical data that must be calibrated to units of meters or strain. The second part of this thesis describes the online calibration technique set up at both observatories to monitor the quality of the collected data in real time. A sensitivity analysis was done to understand and eliminate noise sources of the instrument.

Coupling of noise sources to the gravitational-wave channel can be reduced if robust feedforward and optimal feedback control loops are implemented. The last part of this thesis describes static and adaptive feedforward noise cancellation techniques applied to Advanced LIGO interferometers and tested at the 40m prototype. Applications of optimal time-domain feedback control techniques and estimators to aLIGO control loops are also discussed.
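
A minimal sketch of adaptive feedforward cancellation of the kind referred to here: a least-mean-squares FIR filter acting on a witness channel (e.g. a seismometer) is adapted sample by sample to minimise the residual in the target channel. This is an illustrative textbook LMS loop, not the aLIGO or 40m prototype implementation.

    import numpy as np

    def lms_feedforward(witness, target, n_taps=64, mu=1e-3):
        """Least-mean-squares adaptive feedforward cancellation: an FIR
        filter on the witness channel is adapted to minimise the residual
        in the target channel. Returns the residual and the final taps."""
        w = np.zeros(n_taps)
        residual = np.zeros_like(target)
        for i in range(n_taps, len(target)):
            x = witness[i - n_taps:i][::-1]    # most recent samples first
            y_hat = w @ x                      # predicted coupling into target
            residual[i] = target[i] - y_hat
            w += 2 * mu * residual[i] * x      # LMS tap update
        return residual, w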

Commissioning work is still ongoing at the sites. The first science run of Advanced LIGO is planned for September 2015 and will last for 3-4 months. This run will be followed by a set of small instrument upgrades that will be installed on a timescale of a few months. The second science run will start in spring 2016 and last for about 6 months. Since the current sensitivity of Advanced LIGO is already more than a factor of 3 higher than that of the initial detectors, and keeps improving on a monthly basis, the upcoming science runs have a good chance of making the first direct detection of gravitational waves.

Relevance:

100.00%

Publisher:

Abstract:

Coccolithophores are the largest source of calcium carbonate in the oceans and are considered to play an important role in oceanic carbon cycles. Current methods to detect the presence of coccolithophore blooms from Earth observation data often produce high numbers of false positives in shelf seas and coastal zones due to the spectral similarity between coccolithophores and other suspended particulates. Current methods are therefore unable to characterise the bloom events in shelf seas and coastal zones, despite the importance of these phytoplankton in the global carbon cycle. A novel approach to detect the presence of coccolithophore blooms from Earth observation data is presented. The method builds upon previous optical work and uses a statistical framework to combine spectral, spatial and temporal information to produce maps of coccolithophore bloom extent. Validation and verification results for an area of the north east Atlantic are presented using an in situ database (N = 432) and all available SeaWiFS data for 2003 and 2004. Verification results show that the approach produces a temporal seasonal signal consistent with biological studies of these phytoplankton. Validation using the in situ coccolithophore cell count database shows a high correct recognition rate of 80% and a low false-positive rate of 0.14 (in comparison to 63% and 0.34 respectively for the established, purely spectral approach). To guide its broader use, a full sensitivity analysis for the algorithm parameters is presented.
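
An illustrative sketch of the two ingredients described: combining independent per-cue (spectral, spatial, temporal) bloom probabilities into a single posterior via likelihood-ratio (odds) multiplication, and scoring the result against an in situ match-up database with a correct recognition rate and a false-positive rate. The fusion rule and variable names are assumptions for illustration, not the published algorithm.

    import numpy as np

    def combine_cue_posteriors(p_cues, prior=0.05):
        """Combine independent per-cue bloom probabilities (each a posterior
        given that cue alone, sharing the same prior) by multiplying
        likelihood ratios in odds form; returns the fused posterior."""
        prior_odds = prior / (1 - prior)
        odds = prior_odds
        for p in p_cues:
            odds *= (p / (1 - p)) / prior_odds
        return odds / (1 + odds)

    def validation_rates(predicted, observed):
        """Correct recognition rate and false-positive rate against an
        in situ match-up database of boolean bloom flags."""
        predicted = np.asarray(predicted, bool)
        observed = np.asarray(observed, bool)
        recognition = (predicted & observed).sum() / observed.sum()
        false_pos = (predicted & ~observed).sum() / (~observed).sum()
        return recognition, false_pos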

Relevance:

100.00%

Publisher:

Abstract:

In this study, the concentration probability distributions of 82 pharmaceutical compounds detected in the effluents of 179 European wastewater treatment plants were computed and inserted into a multimedia fate model. The comparative ecotoxicological impact of the direct emission of these compounds from wastewater treatment plants on freshwater ecosystems, based on a potentially affected fraction (PAF) of species approach, was assessed to rank compounds based on priority. As many pharmaceuticals are acids or bases, the multimedia fate model accounts for regressions to estimate pH-dependent fate parameters. An uncertainty analysis was performed by means of Monte Carlo analysis, which included the uncertainty of fate and ecotoxicity model input variables, as well as the spatial variability of landscape characteristics on the European continental scale. Several pharmaceutical compounds were identified as being of greatest concern, including 7 analgesics/anti-inflammatories, 3 β-blockers, 3 psychiatric drugs, and 1 each of 6 other therapeutic classes. The fate and impact modelling relied extensively on estimated data, given that most of these compounds have little or no experimental fate or ecotoxicity data available, as well as a limited reported occurrence in effluents. The contribution of estimated model input variables to the variance of freshwater ecotoxicity impact, as well as the lack of experimental abiotic degradation data for most compounds, helped in establishing priorities for further testing. Generally, the effluent concentration and the ecotoxicity effect factor were the model input variables with the most significant effect on the uncertainty of output results.
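
A minimal sketch of two elements mentioned here: a Henderson-Hasselbalch neutral fraction used to make a partitioning parameter pH-dependent for an ionisable compound, and a small Monte Carlo propagation of input uncertainty to an impact proxy. All distributions, the pKa and the effect factor are illustrative placeholders, not values from the study.

    import numpy as np

    def neutral_fraction(pH, pKa, is_acid=True):
        """Henderson-Hasselbalch neutral fraction of a monoprotic acid or base."""
        return 1.0 / (1.0 + 10.0 ** ((pH - pKa) if is_acid else (pKa - pH)))

    def log_d(log_kow, pH, pKa, is_acid=True):
        """Apparent octanol-water distribution ratio, neglecting partitioning
        of the ionised species (a common simplification)."""
        return log_kow + np.log10(neutral_fraction(pH, pKa, is_acid))

    # Tiny Monte Carlo over input uncertainty (illustrative numbers only)
    rng = np.random.default_rng(0)
    n = 10_000
    pH = rng.normal(8.0, 0.3, n)                         # receiving-water pH
    log_kow = rng.normal(3.5, 0.2, n)                    # hypothetical acid, pKa 4.2
    conc = rng.lognormal(np.log(0.1), 0.8, n)            # effluent concentration, ug/L
    effect_factor = rng.lognormal(np.log(0.5), 1.0, n)   # PAF per (ug/L), placeholder
    impact = conc * effect_factor                        # potentially affected fraction proxy
    print("log D (5th-95th pct):", np.percentile(log_d(log_kow, pH, 4.2), [5, 95]))
    print("impact (median, 95th pct):", np.percentile(impact, [50, 95]))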